Performance Requirement


Light over Heavy: Automated Performance Requirements Quantification with Linguistic Inducement

Wang, Shihai, Chen, Tao

arXiv.org Artificial Intelligence

Elicited performance requirements need to be quantified for compliance checking in different engineering tasks, e.g., configuration tuning and performance testing. Much existing work relies on manual quantification, which is expensive and error-prone due to its imprecision. In this paper, we present LQPR, a highly efficient automatic approach for performance requirements quantification. LQPR relies on a new theoretical framework that casts quantification as a classification problem. Despite the prevalent application of Large Language Models (LLMs) to requirement analytics, LQPR takes a different perspective on the classification: we observed that performance requirements can exhibit strong patterns and are often short and concise, so we design a lightweight linguistically induced matching mechanism. We compare LQPR against nine state-of-the-art learning-based approaches over diverse datasets, demonstrating that it ranks as the sole best approach for 75% or more of the cases at two orders of magnitude less cost. Our work shows that, at least for performance requirement quantification, specialized methods can be more suitable than general LLM-driven approaches.
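
The abstract's idea of exploiting the strong, concise patterns of performance requirements can be illustrated with a minimal sketch. The categories and regular expressions below are invented for illustration and are not LQPR's actual matching mechanism:

```python
import re

# Illustrative rules mapping linguistic cues in a short performance
# requirement to a quantifiable category and a numeric bound. These
# patterns are assumptions for this sketch, not LQPR's design.
RULES = [
    ("upper_bound", re.compile(r"\b(?:no more than|at most|within|under)\s+(\d+(?:\.\d+)?)")),
    ("lower_bound", re.compile(r"\b(?:at least|no less than)\s+(\d+(?:\.\d+)?)")),
]

def classify(requirement: str):
    """Return (category, numeric value) for a requirement sentence,
    or ("unquantified", None) when no pattern matches."""
    text = requirement.lower()
    for category, pattern in RULES:
        match = pattern.search(text)
        if match:
            return category, float(match.group(1))
    return "unquantified", None
```

For example, `classify("The latency shall ideally be within 2 seconds")` yields `("upper_bound", 2.0)`, while a sentence with no recognizable numeric cue falls through to `("unquantified", None)`.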


CoTune: Co-evolutionary Configuration Tuning

Xiong, Gangda, Chen, Tao

arXiv.org Artificial Intelligence

To automatically tune configurations for the best possible system performance (e.g., runtime or throughput), much work has focused on designing intelligent heuristics in a tuner. However, existing tuner designs have mostly ignored the presence of complex performance requirements (e.g., the latency shall ideally be 2 seconds), simply assuming that better performance is always preferred. This not only wastes valuable information in a requirement but may also consume extensive resources tuning for a goal with little gain. Yet, prior studies have shown that simply incorporating the requirement as a tuning objective is problematic, since the requirement might be too strict, harming convergence, or its highly diverse degrees of satisfaction might lead to premature convergence. In this paper, we propose CoTune, a tool that takes the information in a given target performance requirement into account through co-evolution. CoTune is unique in that it creates an auxiliary performance requirement to be co-evolved with the configurations; this auxiliary requirement assists the target performance requirement when the latter becomes ineffective or even misleading, allowing the tuning to be guided by the requirement while remaining robust to its harm. Experimental results on 162 cases (nine systems and 18 requirements) reveal that CoTune considerably outperforms existing tuners, ranking as the best for 90% of cases (against 0%--35% for the other tuners) with up to 2.9x overall improvement, while doing so with much better efficiency.


Robustness Requirement Coverage using a Situation Coverage Approach for Vision-based AI Systems

Shahbeigi, Sepeedeh, Proma, Nawshin Mannan, Hodge, Victoria, Hawkins, Richard, Li, Boda, Donzella, Valentina

arXiv.org Artificial Intelligence

AI-based robots and vehicles are expected to operate safely in complex and dynamic environments, even in the presence of component degradation. In such systems, perception relies on sensors such as cameras to capture environmental data, which is then processed by AI models to support decision-making. However, degradation in sensor performance directly impacts input data quality and can impair AI inference. Specifying safety requirements for all possible sensor degradation scenarios leads to unmanageable complexity and inevitable gaps. In this position paper, we present a novel framework that integrates camera noise factor identification with situation coverage analysis to systematically elicit robustness-related safety requirements for AI-based perception systems. We focus specifically on camera degradation in the automotive domain. Building on an existing framework for identifying degradation modes, we propose involving domain, sensor, and safety experts, and using Operational Design Domain specifications, to extend the degradation model with noise factors relevant to AI performance. Situation coverage analysis is then applied to identify representative operational contexts. This work marks an initial step toward integrating noise factor analysis and situation coverage to support the principled formulation and completeness assessment of robustness requirements for camera-based AI perception.


Comparison between External and Internal Single-Stage Planetary Gearbox Actuators for Legged Robots

Singh, Aman, Kapa, Deepak, Chedda, Prasham, Kolathaya, Shishir N. Y.

arXiv.org Artificial Intelligence

Legged robots, such as quadrupeds and humanoids, require high-performance actuators for efficient locomotion. Quasi-Direct-Drive (QDD) actuators with single-stage planetary gearboxes offer low inertia, high efficiency, and transparency. Among planetary gearbox architectures, Internal (ISSPG) and External Single-Stage Planetary Gearbox (ESSPG) are the two predominant designs. While ISSPG is often preferred for its compactness and high torque density at certain gear ratios, no objective comparison between the two architectures exists. Additionally, existing designs rely on heuristics rather than systematic optimization. This paper presents a design framework for optimally selecting actuator parameters based on given performance requirements and motor specifications. Using this framework, we generate and analyze various optimized gearbox designs for both architectures. Our results demonstrate that for the T-motor U12, ISSPG is the superior choice within the lower gear ratio range of 5:1 to 7:1, offering a lighter design. However, for gear ratios exceeding 7:1, ISSPG becomes infeasible, making ESSPG the better option in the 7:1 to 11:1 range. To validate our approach, we designed and optimized two actuators for manufacturing: an ISSPG with a 6.0:1 gear ratio and an ESSPG with a 7.2:1 gear ratio. Their respective masses closely align with our optimization model predictions, confirming the effectiveness of our methodology.


A Quantitative Autonomy Quantification Framework for Fully Autonomous Robotic Systems

Gyagenda, Nasser, Roth, Hubert

arXiv.org Artificial Intelligence

Although autonomous functioning facilitates the deployment of robotic systems in domains that admit limited human oversight, on our planet and beyond, finding the correspondence between task requirements and autonomous capability is still an open challenge. Consequently, a number of methods for quantifying autonomy have been proposed over the last three decades, but to our knowledge none of them discerns sub-mode features of variation in autonomy, and some are based on metrics that violate Goodhart's law. This paper focuses on the fully autonomous mode and proposes a task-requirements-based autonomy assessment framework. The framework starts by establishing robot task characteristics, from which it derives three autonomy metrics, namely requisite capability, reliability, and responsiveness, together with functions for determining autonomy as a two-part measure: a level of autonomy and a degree of autonomy. These characteristics are founded on the realization that robots ultimately replace skilled human workers, which suggests a mapping between human job characteristics and robot task characteristics. The distinction between level and degree of autonomy stems from the acknowledgment that autonomy is not just a question of the existence of a requisite capability, but also one of its performance. When continuously monitored, the proposed metrics provide a means of monitoring the integrity of a system. The framework has been demonstrated on two case studies: an autonomous vehicle performing an on-road dynamic driving task and an analysis of the DARPA SubT Challenge rules. The framework provides not only a tool for quantifying autonomy, but also a regulatory interface and a common language for autonomous systems developers and users.


Adaptive Workload Distribution for Accuracy-aware DNN Inference on Collaborative Edge Platforms

Taufique, Zain, Miele, Antonio, Liljeberg, Pasi, Kanduri, Anil

arXiv.org Artificial Intelligence

DNN inference can be accelerated by distributing the workload among a cluster of collaborative edge nodes. Heterogeneity among edge devices and the accuracy-performance trade-offs of DNN models present a complex exploration space when catering to inference performance requirements. In this work, we propose adaptive workload distribution for DNN inference, jointly considering the node-level heterogeneity of edge devices and application-specific accuracy and performance requirements. Our proposed approach combinatorially optimizes heterogeneity-aware workload partitioning and dynamic accuracy configuration of DNN models to ensure performance and accuracy guarantees. We tested our approach on an edge cluster of Odroid XU4, Raspberry Pi 4, and Jetson Nano boards and achieved an average gain of 41.52% in performance and 5.2% in output accuracy compared to state-of-the-art workload distribution strategies.
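
The paper's approach jointly optimizes partitioning and accuracy configuration; a much simpler static baseline, splitting work in proportion to measured node throughput, can sketch the heterogeneity-aware partitioning half of that idea. The node names and rates below are hypothetical:

```python
def proportional_split(total_batches: int, throughput: dict) -> dict:
    """Split a batch workload across nodes in proportion to their
    measured throughput (inferences/sec), assigning remainder units
    to the fastest nodes first. A static baseline only; the paper's
    approach additionally adapts DNN accuracy configurations."""
    total_rate = sum(throughput.values())
    shares = {n: int(total_batches * r / total_rate) for n, r in throughput.items()}
    leftover = total_batches - sum(shares.values())
    # Hand out rounding leftovers to the highest-throughput nodes.
    for n in sorted(throughput, key=throughput.get, reverse=True):
        if leftover == 0:
            break
        shares[n] += 1
        leftover -= 1
    return shares
```

With rates of 30, 10, and 60 inferences/sec for three nodes and 100 batches, the split is exactly 30/10/60; when the division is not exact, the remainder goes to the fastest node.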


QTWTL: Quality Aware Time Window Temporal Logic for Performance Monitoring

Bonnah, Ernest, Hoque, Khaza Anuarul

arXiv.org Artificial Intelligence

In various service-oriented applications such as distributed autonomous delivery, healthcare, tourism, transportation, and many others, where service agents need to perform serial and time-bounded tasks to achieve their goals, quality of service must constantly be assured. In addition to safety requirements, such agents need to fulfill performance requirements in order to satisfy their quality of service. This paper proposes the novel quality-aware time window temporal logic (QTWTL), which extends the traditional time window temporal logic (TWTL) with two operators for counting and aggregation operations. We also propose offline runtime monitoring algorithms for the performance monitoring of QTWTL specifications. To analyze the feasibility and efficiency of our proposed approach, we generate a large number of traces using the New York City Taxi and Limousine Commission Trip Record data, formalize their performance requirements using QTWTL, and monitor them using the proposed algorithms. The obtained results show that the monitoring algorithm has linear space and time complexity with respect to the number of traces monitored.
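
QTWTL's formal semantics are defined in the paper; as a loose illustration of what offline monitoring of a counting requirement over a time window involves, the sketch below checks a minimum-occurrence requirement against a timestamped trace. The trace format and the requirement are invented for this example and do not follow QTWTL syntax:

```python
def count_in_window(trace, event, t_start, t_end):
    """Count occurrences of `event` in a timestamped trace
    (a list of (timestamp, event) pairs) within [t_start, t_end].
    A single linear pass per trace, mirroring the linear time and
    space complexity reported for the monitoring algorithms."""
    return sum(1 for t, e in trace if t_start <= t <= t_end and e == event)

def satisfies(trace, event, t_start, t_end, min_count):
    """Offline check of a counting requirement: at least
    `min_count` occurrences of `event` in the time window."""
    return count_in_window(trace, event, t_start, t_end) >= min_count
```

For a delivery agent, for instance, a requirement such as "complete at least three drop-offs within the first ten time units" reduces to one such counting check per trace.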


Do Performance Aspirations Matter for Guiding Software Configuration Tuning?

Chen, Tao, Li, Miqing

arXiv.org Artificial Intelligence

Configurable software systems can be tuned for better performance. Leveraging Pareto optimizers, recent work has shifted from tuning for a single, time-related performance objective to two intrinsically different objectives that assess distinct performance aspects of the system, each with varying aspirations. Before we design better optimizers, a crucial engineering decision is how to handle performance requirements with clear aspirations in the tuning process. For this, the community takes two alternative optimization models: either quantifying the aspirations and incorporating them into the search objectives that guide the tuning, or ignoring the aspirations during the search and using them purely in the later decision-making process. However, despite being a crucial decision that determines how an optimizer can be designed and tailored, there is rather limited understanding of which optimization model should be chosen under which circumstances, and why. In this paper, we seek to close this gap. First, we review 426 papers in the literature and 14 real-world requirements datasets. Drawing on these, we then conduct a comprehensive empirical study that covers 15 combinations of state-of-the-art performance requirement patterns, four types of aspiration space, three Pareto optimizers, and eight real-world systems/environments, leading to 1,296 cases of investigation. We found that (1) the realism of the aspirations is the key factor determining whether they should be used to guide the tuning; (2) the given patterns and the position of realistic aspirations in the objective landscape matter less for the choice, but they do affect the extent of improvement; (3) the available tuning budget can also influence the choice for unrealistic aspirations, but its influence is insignificant under realistic ones.


Situation-Aware Environment Perception Using a Multi-Layer Attention Map

Henning, Matti, Müller, Johannes, Gies, Fabian, Buchholz, Michael, Dietmayer, Klaus

arXiv.org Artificial Intelligence

Within the field of automated driving, environment perception shows a clear trend towards more sensors, higher redundancy, and an overall increase in computational power. This is mainly driven by the paradigm of perceiving the entire environment as well as possible at all times. However, due to the ongoing rise in functional complexity, compromises have to be made to ensure the real-time capability of the perception system. In this work, we introduce a concept for situation-aware environment perception that controls resource allocation by processing only the relevant areas within the data and by employing only a subset of the functional modules for environment perception, when sufficient for the current driving task. Specifically, we propose to evaluate the context of an automated vehicle to derive a multi-layer attention map (MLAM) that defines relevant areas. Using this MLAM, an optimal subset of active functional modules is dynamically configured, and intra-module processing of only the relevant data is enforced. We outline the feasibility of our concept using real-world data in a straightforward implementation for our system at hand. While retaining overall functionality, we achieve a 59% reduction in accumulated processing time.


Pairing Conceptual Modeling with Machine Learning

Maass, Wolfgang, Storey, Veda C.

arXiv.org Artificial Intelligence

Both conceptual modeling and machine learning have long been recognized as important areas of research. With the increasing emphasis on digitizing and processing large amounts of data for business and other applications, it would be helpful to consider how these areas of research can complement each other. To understand how they can be paired, we provide an overview of machine learning foundations and the development cycle. We then examine how conceptual modeling can be applied to machine learning and propose a framework for incorporating conceptual modeling into data science projects. The framework is illustrated by applying it to a healthcare application. For the inverse pairing, machine learning can impact conceptual modeling through text and rule mining, as well as knowledge graphs. Pairing conceptual modeling and machine learning in this way should help lay the foundations for future research.